TECHnalysis Research
 
Previous Blogs

March 17, 2015
The Smart Home Decade Dilemma

March 10, 2015
Apple Event Surprises

March 3, 2015
Flat Slab Finale?

February 26, 2015
Insider Extra: "Phablet" Impact Continues to Grow

February 24, 2015
Paying for Digital Privacy

February 19, 2015
Insider Extra: The Wire-Free PC

February 17, 2015
Whither Apple?

February 12, 2015
Insider Extra: The Real IOT Opportunity? Industry

February 10, 2015
Business Models For The Internet of Things (IOT)

February 5, 2015
Insider Extra: Is "Mobile Only" The Future?

February 3, 2015
Sexiest New Devices? PCs...

January 29, 2015
Insider Extra: iPhone Next

January 27, 2015
How Will Windows 10 Impact PCs and Tablets?

January 22, 2015
Insider Extra: Hands-On (or Heads-on) With HoloLens

January 20, 2015
Whither Windows 10?

January 15, 2015
Insider Extra: Mobile Security: The Key to a Successful BYOD Implementation

January 13, 2015
Smart Home Situation Likely To Get Worse Before It Gets Better

January 6, 2015
More Tech Predictions for 2015

December 30, 2014
Top 5 Tech Predictions for 2015

2014 Blogs


2013 Blogs

TECHnalysis Research Blog Extra

March 19, 2015
Insider Extra: The Future of Computing is Invisible

By Bob O'Donnell

I’ve started to see where the future of computing is headed and, paradoxically, it’s invisible.

I say this because I’ve attended and read about several interesting events this past week that have forced me to put some serious thought into where computing is headed. From nVidia’s GPU Conference in San Jose, to the HSA (Heterogeneous System Architecture) Foundation’s release of its 1.0 spec, to even Microsoft’s unveiling of its Windows Hello biometric authentication support for Windows 10, this has been a fascinating week for thinking about where more advanced computing-based applications are going.

The theme that’s tying all these elements together is what I’m calling “invisible computing”, because the end result isn’t something that you can see or even directly engage with. If you think about most types of computing efforts—running an application, playing a game, watching a video, looking up data—there’s usually some kind of visual component that you’re presented with, either as the end result, or as a key element of the process.

The kinds of invisible computing that I’ve been hearing about this week, however, are interesting applications that work behind the scenes and only indirectly provide any kind of visual feedback. For example, nVidia’s CEO Jen-Hsun Huang spent a great deal of his keynote speech at the company’s GPU Conference on “deep learning.” The idea is that for applications like computer vision or autonomous driving, a great deal of behind-the-scenes computing, based on sensors and other real-world sources, is going on. These systems “learn” by crunching through that data input, and then they are better equipped to provide more accurate readings as new data comes along.
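
To make that idea a bit more concrete, here’s a minimal sketch of “learning by crunching data”: a toy model whose weights improve as it repeatedly passes over example input. Everything here (the data, the model, the parameters) is made up for illustration; it shows the principle, not anything nVidia actually ships.

```python
# A minimal, hypothetical sketch of the "learning" step described above:
# a model refines its weights by repeatedly crunching through example data.
import numpy as np

rng = np.random.default_rng(0)

# Toy "sensor readings": 2 features per sample; label is 1 when their sum > 1.
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)  # model weights, refined as data is crunched
b = 0.0

for _ in range(500):                      # each pass "learns" from the data
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))          # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)       # gradient of the logistic loss
    grad_b = (p - y).mean()
    w -= 0.5 * grad_w                     # nudge weights toward better answers
    b -= 0.5 * grad_b

print("accuracy:", ((p > 0.5) == y).mean())  # well above chance after training
```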

In the case of computer vision applications, the magic is in the ability to identify elements in an image without any kind of human interaction. In fact, within the past month or so, computer vision systems have finally surpassed a 95% accuracy rate as compared to specially trained humans. The way the system works is that a multi-stage neural network (that is, a series of mathematical algorithms designed to analyze images) runs through a very compute-intensive set of equations that allow a computer to determine that, yes, the image on the screen is a rabbit (or a dog, or a Porsche sports car, or whatever a human brain clearly recognizes it to be). Despite the fact that this is clearly a visual application, the “response” is simply text that’s automatically inserted under an existing image.
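
For a rough feel of how those “stages” of mathematical algorithms turn pixels into a caption, here’s a toy two-stage forward pass. The weights are random placeholders rather than trained values, so the label it prints is meaningless; the point is simply that an image goes in and only text comes out.

```python
# A heavily simplified, hypothetical sketch of the inference path: an image
# flows through successive stages of matrix math, and the only visible
# output is a text label. A real system would load trained weights.
import numpy as np

rng = np.random.default_rng(1)
LABELS = ["rabbit", "dog", "porsche"]          # hypothetical label set

image = rng.random((32, 32, 3)).reshape(-1)    # fake 32x32 RGB image, flattened

# Two "stages" of the network: a hidden layer, then an output layer.
W1, b1 = rng.standard_normal((256, image.size)) * 0.01, np.zeros(256)
W2, b2 = rng.standard_normal((len(LABELS), 256)) * 0.01, np.zeros(len(LABELS))

h = np.maximum(0.0, W1 @ image + b1)           # stage 1: ReLU activation
scores = W2 @ h + b2                           # stage 2: one score per label

print("caption:", LABELS[int(np.argmax(scores))])  # text is the only output
```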

Computer vision can be applied to an even more interesting and even more invisible application: autonomous driving. Huang and Tesla CEO Elon Musk discussed the “inevitable” future of self-driving cars and the fact that the computing will be done within the car (the “client device” in this case), not in the cloud. The concept behind Advanced Driver Assistance Systems (ADAS) is that a car’s control systems can respond automatically to the signals an embedded computer vision system sends them, keeping cars out of accidents by, in effect, driving automatically.
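
A conceptual sketch of that loop might look like the following, where each frame’s vision output drives a control decision on board the car. Every class, function, signal name, and threshold here is hypothetical.

```python
# A hypothetical sketch of the ADAS idea: the in-car vision system emits
# detections, and the car's control logic reacts without driver input.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # e.g. "pedestrian", "stop_sign" (invented labels)
    distance_m: float  # estimated distance to the detected object

def plan_action(detections: list[Detection], speed_kmh: float) -> str:
    """Decide, on board (the 'client device'), how the car should respond."""
    for d in detections:
        if d.kind == "pedestrian" and d.distance_m < 20:
            return "emergency_brake"
        if d.kind == "stop_sign" and d.distance_m < 50 and speed_kmh > 0:
            return "slow_to_stop"
    return "maintain"

# One frame's worth of (fabricated) vision output driving a control decision:
frame = [Detection("stop_sign", 42.0), Detection("car", 80.0)]
print(plan_action(frame, speed_kmh=60.0))  # -> "slow_to_stop"
```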

Arguably, this kind of behind-the-scenes effort is what’s driving big data and analytics trends in servers and big corporate data centers. What’s different now is that this kind of invisible computing is moving more towards client devices. We’re seeing efforts to bring the kind of analytics and intensive data-crunching currently running on servers onto machines (or cars) that we all actually use.

The HSA Foundation is an industry organization devoted to developing standards for the general programming of CPUs and GPUs for a variety of applications. Originally founded by AMD, the organization has been working on its specifications for several years, and their release happened to coincide nicely with nVidia’s GPU news. As with nVidia, many of the more interesting applications the HSA Foundation hopes to enable involve advanced data analysis capabilities, which, in certain cases, will run invisibly to the user.

In the case of Microsoft’s biometric login support for Windows 10 devices, it’s about taking away the need to type in an easily forgotten (or stolen) password. Instead, you securely log in not only to your machine, but also to the applications and services you use on it, merely through your physical presence and perhaps a gesture like swiping your finger. This is a potentially huge improvement in usability (as I wrote about in one of my 2015 predictions columns) and is a great example of how “invisible computing” can make our experience of using devices much better.
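
This is emphatically not the Windows Hello API itself, but the general pattern is easy to sketch: a biometric match performed locally releases a stored credential, so nothing resembling a password is ever typed. All names and values below are invented.

```python
# Not Microsoft's actual API: a hypothetical sketch of the general pattern,
# in which a local biometric match unlocks a stored sign-in credential.
import hmac

def biometric_matches(live_sample: bytes, enrolled_template: bytes) -> bool:
    # Real systems compare feature vectors against a tuned threshold; a
    # constant-time byte comparison stands in for that step here.
    return hmac.compare_digest(live_sample, enrolled_template)

def unlock(live_sample: bytes, enrolled_template: bytes, vault: dict) -> str | None:
    """Release the stored credential only on a successful biometric match."""
    if biometric_matches(live_sample, enrolled_template):
        return vault["credential"]   # apps and services get this, never a password
    return None

vault = {"credential": "opaque-token-issued-at-enrollment"}
print(unlock(b"fingerprint-bits", b"fingerprint-bits", vault))  # signed in
print(unlock(b"someone-else", b"fingerprint-bits", vault))      # None: denied
```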

Traditional visual computing will obviously continue to be the primary modus operandi of all the computing devices we use every day—and particularly for the GPUs at the heart of nVidia’s, AMD’s, and the HSA Foundation’s advancements. After decades of enormous progress, however, it increasingly appears that the more interesting compute applications will start to turn invisible, simply improving our experience (and possibly our safety) in using our growing range of connected devices.

Here's a link to the original column: https://techpinions.com/the-future-of-computing-is-invisible/39278
